Defend Data Poisoning Attacks on Voice Authentication

Authors

Abstract

With the advances in deep learning, speaker verification has achieved very high accuracy and is gaining popularity as a biometric authentication option in many scenes of our daily life, especially in the growing market of web services. Compared to traditional passwords, "vocal passwords" are much more convenient, as they relieve people from memorizing different passwords. However, new machine learning attacks are putting these voice authentication systems at risk. Without a strong security guarantee, attackers could access legitimate users' accounts by fooling the deep neural network (DNN) based recognition models. In this paper, we demonstrate an easy-to-implement data poisoning attack, which can hardly be captured by existing defense mechanisms. We therefore propose a robust defense method, called Guardian, a convolutional neural network-based discriminator. The Guardian discriminator integrates a series of novel techniques, including bias reduction, input augmentation, and ensemble learning. Our approach is able to distinguish about 95% of attacked accounts from normal accounts, which is far more effective than existing approaches that achieve only 60% accuracy.
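
The abstract gives only a high-level description of Guardian. As a rough illustration, the sketch below shows how a convolutional discriminator ensemble with input augmentation might be wired together; the input shape, network sizes, noise-based augmentation, and all names are assumptions made for illustration, not the authors' implementation, and the bias-reduction step is omitted.

```python
# Rough sketch (assumed shapes and names, not the paper's code): an ensemble
# of small CNN discriminators that vote on whether an account's voice
# features look poisoned (1 = attacked) or normal (0 = normal).
import torch
import torch.nn as nn

class CNNDiscriminator(nn.Module):
    """Binary classifier over a hypothetical (1, 64, 64) spectrogram patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def augment(x):
    # Input augmentation (assumed form): a lightly noised view per pass.
    return x + 0.01 * torch.randn_like(x)

def ensemble_predict(models, x, passes=5):
    # Average softmax scores over ensemble members and augmented views.
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(m(augment(x)), dim=1)
            for m in models for _ in range(passes)
        ])
    return probs.mean(dim=0).argmax(dim=1)

models = [CNNDiscriminator().eval() for _ in range(3)]  # untrained placeholders
accounts = torch.randn(4, 1, 64, 64)                    # 4 hypothetical accounts
print(ensemble_predict(models, accounts))               # tensor of 0s and 1s
```

Averaging softmax scores across ensemble members and augmented views is a standard way to stabilize such a detector's decisions; the 95% detection rate reported in the abstract refers to the full Guardian pipeline, not to this sketch.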

Similar Resources

Using Random Bit Authentication to Defend IEEE 802.11 DoS Attacks

IEEE 802.11 networks are insecure. Wired Equivalent Privacy (WEP), the security mechanism used in 802.11, was proved to be vulnerable. IEEE 802.11i, the security enhancement, concentrates only on the integrity and confidentiality of transmitted frames; neither version properly handles network availability. Because management frames are not authenticated, both 802.11 and 802.11i networks are sus...

HMAC-Based Authentication Protocol: Attacks and Improvements

As a response to the growing interest in RFID systems such as Internet of Things technology, along with the need to satisfy the security of these networks, proposing secure authentication protocols is an indispensable part of system design. Hence, authentication protocols that increase security and privacy in RFID applications have gained much attention in the literature. In this study, the security and privac...

Data Poisoning Attacks against Autoregressive Models

Forecasting models play a key role in money-making ventures in many different markets. Such models are often trained on data from various sources, some of which may be untrustworthy. An actor in a given market may be incentivised to drive predictions in a certain direction to their own benefit. Prior analyses of intelligent adversaries in a machine-learning context have focused on regression an...

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning

Deep learning models have achieved high performance on many tasks, and thus have been applied to many security-critical scenarios. For example, deep learning-based face recognition systems have been used to authenticate users to access many security-sensitive applications like payment apps. Such usages of deep learning systems provide the adversaries with sufficient incentives to perform attack...

Some Submodular Data-Poisoning Attacks on Machine Learners

The security community has long recognized the threats of data-poisoning attacks (a.k.a. causative attacks) on machine learning systems [1–6, 9, 10, 12, 16], where an attacker modifies the training data, so that the learning algorithm arrives at a “wrong” model that is useful to the attacker. To quantify the capacity and limits of such attacks, we need to know first how the attacker may modify ...
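
The excerpt stops before the paper's formal attack model. As a generic illustration of the idea that modifying the training data steers the learner toward a "wrong" model (not the paper's submodular formulation), here is a minimal label-flipping example against a logistic-regression learner; the data, flip budget, and boundary-proximity heuristic are all hypothetical.

```python
# Minimal illustration of data poisoning by label flipping (a generic
# example, not the submodular attack formulated in the paper above).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean, roughly separable training data: two Gaussian blobs.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

clean = LogisticRegression().fit(X, y)

# Attacker flips the labels of the 20 points closest to the decision
# boundary, steering the learner toward a "wrong" decision surface.
margin = np.abs(clean.decision_function(X))
flip = np.argsort(margin)[:20]
y_poisoned = y.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned = LogisticRegression().fit(X, y_poisoned)

# Compare the two models on fresh, clean test data.
X_test = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y_test = np.array([0] * 200 + [1] * 200)
print("clean model accuracy:   ", clean.score(X_test, y_test))
print("poisoned model accuracy:", poisoned.score(X_test, y_test))
```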

Journal

Journal title: IEEE Transactions on Dependable and Secure Computing

Year: 2023

ISSN: 1941-0018, 1545-5971, 2160-9209

DOI: https://doi.org/10.1109/tdsc.2023.3289446